Talk given for the Redwood Center for Theoretical Neuroscience on February 13, 2008. Speaker is Marcelo Magnasco of Rockefeller University.
Abstract. Auditory neurons preserve exquisite temporal information about sound features, but we do not know how the brain uses this information to parse the rapidly changing sounds of the natural world. A simple argument for making effective use of temporal information in the auditory nerve leads us to consider the reassignment class of time-frequency representations as a potential model of auditory processing. We show that these representations are sparse even for spectrally dense signals. Many details of complex sounds that are virtually undetectable in standard sonograms are readily perceptible and visible in reassignment; as the only known class of time-frequency representations that is always "in focus," this methodology may help explain the remarkable acuity of auditory perception. We also consider how to determine, experimentally, when a neural code embeds information in the detailed timing of spikes. We show that standard "spike-triggered" receptive field constructions are inadequate to extract this level of information, and we present a new method, "differential reverse correlations," based on correlating small changes in spike timing with the small changes in the stimulus that produce them.
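For readers unfamiliar with reassignment, the sketch below illustrates the standard Auger-Flandrin construction the abstract refers to: energy in each short-time Fourier transform bin is moved from the bin center to a locally estimated time (group delay) and frequency (instantaneous frequency), computed from auxiliary STFTs taken with a time-weighted window and with the window's derivative. This is a minimal, hedged illustration of the general technique, not the speaker's own code; the Gaussian window, parameter names, and defaults are assumptions made for the example.

```python
# Minimal sketch of a reassigned spectrogram (Auger-Flandrin style),
# assuming a Gaussian analysis window. Illustrative only.
import numpy as np

def reassigned_spectrogram(x, fs, win_len=256, hop=64, sigma=None):
    """Return (reassigned_times, reassigned_freqs, magnitudes), one point per STFT bin."""
    if sigma is None:
        sigma = win_len / 6.0
    t = np.arange(win_len) - (win_len - 1) / 2.0   # sample index relative to window center
    h = np.exp(-0.5 * (t / sigma) ** 2)            # Gaussian analysis window
    th = t * h                                     # time-weighted window
    dh = -(t / sigma ** 2) * h                     # analytic derivative of the window

    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.lib.stride_tricks.sliding_window_view(x, win_len)[::hop][:n_frames]

    # STFTs of the same frames under the three windows
    X_h = np.fft.rfft(frames * h, axis=1)
    X_th = np.fft.rfft(frames * th, axis=1)
    X_dh = np.fft.rfft(frames * dh, axis=1)

    power = np.abs(X_h) ** 2 + 1e-12

    frame_times = (np.arange(n_frames) * hop + (win_len - 1) / 2.0) / fs  # window centers, s
    bin_freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)                      # bin centers, Hz

    # Reassignment operators: shift each bin to its local center of gravity
    t_hat = frame_times[:, None] + np.real(X_th * np.conj(X_h)) / power / fs
    f_hat = bin_freqs[None, :] - np.imag(X_dh * np.conj(X_h)) / power * fs / (2 * np.pi)

    return t_hat, f_hat, np.abs(X_h)
```

For a pure tone or an isolated click, the reassigned points collapse onto the true frequency or onset time rather than being smeared over the window's bandwidth and duration, which is the sense in which the representation stays "in focus."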